Proxmox 7.1 cannot run > 1 CT

kotakomputer

After upgrading to 7.1 my VPS was working fine, but when I changed the IP (and also the MAC address) I got RTO. So I switched back to the old IP (with a new MAC address, because I don't remember my old MAC address), but still RTO.

It looks like the new MAC address mechanism in Proxmox 7.1 is causing this error? Any solution?
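For reference, a CT's MAC can be changed by rewriting its net0 line with pct set; the VMID and hwaddr below are just placeholders, not my actual values:

Code:
# placeholder VMID and MAC, adjust to your container
pct set 100 --net0 name=eth0,bridge=vmbr0,firewall=1,hwaddr=AA:BB:CC:DD:EE:FF,type=veth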
 
After switching back to version 5, the CT runs fine.

I see this bug occur when:

- The server was installed with Proxmox 7.1 (not upgraded from 6 or 5)
- Two CTs are created (in this case I'm using CentOS 7 from the Proxmox templates)
- Running one CT is fine, but as soon as I start the second CT I cannot even access local storage or ping
- After shutting one CT down, so only one CT is running, everything works again

(A reproduction sketch with the pct CLI follows below.)
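A minimal way to reproduce along these lines; the template file name and the 'local'/'local-data' storage names are examples, not taken from this server, while cores, memory, and rootfs sizes match the configs posted later in the thread:

Code:
# create two CentOS 7 CTs (template name and storages are assumptions)
pct create 100 local:vztmpl/centos-7-default_20190926_amd64.tar.xz \
    --hostname test1 --cores 4 --memory 64000 --unprivileged 1 \
    --rootfs local-data:100 \
    --net0 name=eth0,bridge=vmbr0,firewall=1,type=veth
pct create 101 local:vztmpl/centos-7-default_20190926_amd64.tar.xz \
    --hostname test2 --cores 4 --memory 32000 --unprivileged 1 \
    --rootfs local-data:110 \
    --net0 name=eth0,bridge=vmbr0,firewall=1,type=veth

pct start 100        # one CT running: everything fine
pct start 101        # second CT running: local storage and ping stop responding
pct shutdown 101     # back to one CT: fine again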
 
Sorry, but what is "RTO"?
It looks like the new MAC address mechanism in Proxmox 7.1 is causing this error? Any solution?
Which new MAC address mechanism do you mean?

Can you post both container configs and your host network config?
 
NB: I changed the subject to "Proxmox 7.1 cannot run > 1 CT" (because this issue is not related to the IP/MAC address).

After some testing, I see this bug is not related to the MAC address.

Based on https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat, I added this to GRUB:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="systemd.unified_cgroup_hierarchy=0 quiet"

Then I ran update-grub and rebooted the server.
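To verify the change took effect after the reboot (standard commands, not from the original post):

Code:
# the parameter should show up on the kernel command line
cat /proc/cmdline
# with the unified hierarchy disabled, legacy cgroup v1 controllers are mounted
mount | grep cgroup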

I can still run only one CT; when the second CT starts, I can no longer access any local storage, as shown in this image.

NB:
1. I also tried adding "unprivileged: 1", but it did not help.
2. RTO: Request Time Out

[Screenshot: local storage no longer accessible while the second CT is running]

Code:
# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface ens1f0 inet manual

auto vmbr0
#iface vmbr0 inet dhcp
iface vmbr0 inet static
        address 202.x.x.x/x
        gateway 202.x.x.x
        bridge-ports ens1f0
        bridge-stp off
        bridge-fd 0

iface ens1f1 inet manual

iface eno1 inet manual

iface eno2 inet manual
#

For this test I didn't set an IP on the CT:
Code:
# cat /etc/pve/lxc/100.conf
arch: amd64
cores: 4
hostname: test1
memory: 64000
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=4E:99:EC:F6:30:CD,type=veth
ostype: centos
rootfs: local-data:100/vm-100-disk-0.raw,size=100G
swap: 512
unprivileged: 1
#
 
Well, running multiple containers definitely works in general... my guess is that your containers do something disk-intensive and the disk is overloaded?
Can you post the output of 'dmesg' as well as the journal from when both containers are running?

Also, the config of the second container would be interesting...
 
Testing with Ubuntu 18.04 and 20.04 containers gives the same error.

This is a new server with only 2 CTs running at the same time. Specs: HP DL360p, 2x 20-core Xeon, 128 GB RAM, 2 TB Samsung SSD. Another server running Proxmox 6 is fine; only the server with Proxmox 7.1 has this issue.

Config of 101 CT:
Code:
# cat /etc/pve/lxc/101.conf
arch: amd64
cores: 4
hostname: test2
memory: 32000
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=F6:FB:BF:15:34:B1,type=veth
ostype: centos
rootfs: local-data:101/vm-101-disk-0.raw,size=110G
swap: 512
unprivileged: 1
#

# dmesg
Code:
[232437.052648] audit: type=1400 audit(1644809520.310:70): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-100_</var/lib/lxc>" pid=2986530 comm="apparmor_parser"
[232438.170738] fwbr100i0: port 1(fwln100i0) entered blocking state
[232438.170753] fwbr100i0: port 1(fwln100i0) entered disabled state
[232438.170977] device fwln100i0 entered promiscuous mode
[232438.171148] fwbr100i0: port 1(fwln100i0) entered blocking state
[232438.171155] fwbr100i0: port 1(fwln100i0) entered forwarding state
[232438.178090] vmbr0: port 2(fwpr100p0) entered blocking state
[232438.178097] vmbr0: port 2(fwpr100p0) entered disabled state
[232438.178237] device fwpr100p0 entered promiscuous mode
[232438.178348] vmbr0: port 2(fwpr100p0) entered blocking state
[232438.178352] vmbr0: port 2(fwpr100p0) entered forwarding state
[232438.185608] fwbr100i0: port 2(veth100i0) entered blocking state
[232438.185619] fwbr100i0: port 2(veth100i0) entered disabled state
[232438.185822] device veth100i0 entered promiscuous mode
[232438.239645] eth0: renamed from vetholDzff
[232439.447501] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[232439.447693] fwbr100i0: port 2(veth100i0) entered blocking state
[232439.447701] fwbr100i0: port 2(veth100i0) entered forwarding state
[232448.310290] loop1: detected capacity change from 0 to 230686720
[232448.392487] EXT4-fs (loop1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
[232448.933154] audit: type=1400 audit(1644809532.190:71): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-101_</var/lib/lxc>" pid=2988186 comm="apparmor_parser"
[232450.041157] fwbr101i0: port 1(fwln101i0) entered blocking state
[232450.041171] fwbr101i0: port 1(fwln101i0) entered disabled state
[232450.041352] device fwln101i0 entered promiscuous mode
[232450.041499] fwbr101i0: port 1(fwln101i0) entered blocking state
[232450.041504] fwbr101i0: port 1(fwln101i0) entered forwarding state
[232450.048609] vmbr0: port 3(fwpr101p0) entered blocking state
[232450.048616] vmbr0: port 3(fwpr101p0) entered disabled state
[232450.048797] device fwpr101p0 entered promiscuous mode
[232450.048917] vmbr0: port 3(fwpr101p0) entered blocking state
[232450.048923] vmbr0: port 3(fwpr101p0) entered forwarding state
[232450.055583] fwbr101i0: port 2(veth101i0) entered blocking state
[232450.055592] fwbr101i0: port 2(veth101i0) entered disabled state
[232450.055780] device veth101i0 entered promiscuous mode
[232450.105990] eth0: renamed from veth3alfOH
[232451.382178] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[232451.382380] fwbr101i0: port 2(veth101i0) entered blocking state
[232451.382387] fwbr101i0: port 2(veth101i0) entered forwarding state
#
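For reference, the fwbr/fwln/fwpr devices in this log are the firewall bridges Proxmox creates per CT interface; their state while both CTs are running can be inspected with standard iproute2 tools (a debugging sketch, not from the original post):

Code:
# list all bridge ports and their STP state
bridge link show
# brief link overview for the host bridges and veth pairs
ip -br link show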
 
I have the exact same issue; I tried upgrading, removing the container, nested/unnested, unprivileged/privileged, etc... to no avail.

SOLVED: Not sure why, but changing the IP fixed it. Maybe DHCP or a colliding IP in the range above (in our case).
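For anyone wanting to check for a colliding IP before changing it, iputils arping has a duplicate address detection mode (the interface and address below are examples):

Code:
# prints the answering MAC and exits non-zero if another host already uses the IP
arping -D -I vmbr0 -c 3 192.0.2.10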
 
