I had the exact same problem.
My error exactly:
It turns out the container's eth0 was configured to use bridge vmbr1, which didn't exist; it should have been vmbr0.
Upon changing the network setting, the container started up immediately with no errors. The error message appears to be very misleading.
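For reference, the bridge is set in the container's net device line in its config file (a hedged example; the container ID, path placeholder, and the other options here are made up, only the bridge= part matters):

```
# /etc/pve/lxc/<CTID>.conf -- bridge=vmbr1 was the broken value; working line:
net0: name=eth0,bridge=vmbr0,firewall=1,ip=dhcp,type=veth
```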
I will keep an eye on this report and add my setup details if it happens again. So far it has only happened once in 90 days of uptime, and we use built-in PVE replication heavily (every 15 minutes for many LXC containers), using ZFS send/receive of course.
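Conceptually, each replication run boils down to an incremental ZFS send piped into a receive on the target node (a hedged sketch only; the dataset, snapshot names, and target host below are made up, and PVE's actual snapshot naming scheme differs):

```shell
# Take a new snapshot of the container's dataset
zfs snapshot rpool/data/subvol-101-disk-0@repl_new
# Send only the delta since the previously replicated snapshot,
# receiving it on the replication target over SSH
zfs send -i @repl_prev rpool/data/subvol-101-disk-0@repl_new \
  | ssh root@target-node zfs receive -F rpool/data/subvol-101-disk-0
```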
I seem to have a similar issue but the stack looks like this:
root@ex1:~# cat /proc/1005172/stack
[<0>] taskq_cancel_id+0xdb/0x110 [spl]
[<0>] zfs_unlinked_drain_stop_wait+0x47/0x70 [zfs]
[<0>] zfsvfs_teardown+0x25/0x2f0 [zfs]
[<0>] zfs_suspend_fs+0x10/0x20 [zfs]
The rules do compile. Everything is applied fine from the PVE side of things; I can see the rules configured in iptables, nftables, ipset, etc.
The only problem is that the kernel does not process packets through them at all, because net.bridge.bridge-nf-call-iptables=0.
There should be a mechanism...
The firewall rules are, of course, configured via the PVE web interface; it is the PVE firewall, after all.
If bridge-nf-filter-vlan-tagged isn't set, the setup I described above wouldn't have a working firewall.
However, I queried the status of all those sysctl settings and...
In most cases where IPs are routed through the PVE host, the bridge-nf-call-* settings do not need to be enabled for PVE Firewall to work.
However, we have recently switched to using a vlan-aware bridge on the host and configure the VLAN ID directly in Proxmox for each container/VM interface...
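To see what the kernel is actually doing, the bridge netfilter switches can be queried directly (this assumes the br_netfilter module is loaded, otherwise these sysctls don't exist):

```shell
# Query the bridge netfilter switches; 0 means bridged packets
# bypass the corresponding filter hooks entirely
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
sysctl net.bridge.bridge-nf-filter-vlan-tagged
# Enable filtering of VLAN-tagged frames on the bridge
sysctl -w net.bridge.bridge-nf-filter-vlan-tagged=1
```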
You can easily ask for 3 hours of KVM access and send them a link to an ISO to be burned onto a flash drive and attached. They'll happily do that for free every time; that's how I always install my Proxmox servers.
Ever since Proxmox 6 came out, we have moved towards using Proxmox with ZFS across all our physical servers, because with the new corosync we no longer need multicast traffic between those servers. We have successfully virtualized servers that were previously on bare metal and even started...
We just experienced a nasty crash whenever the kernel touched our ZFS pool. This occurred after we replaced a faulty drive and resilvered, but in fact it had nothing to do with that.
The crash occurs when ZFS tries to replay the ZIL after a previous power loss. The issues linked below document...
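If anyone else hits this, a common way to get at the data without triggering ZIL replay is a read-only import (a general ZFS workaround, not specific to our setup; the pool name is illustrative):

```shell
# A read-only import does not replay the intent log, so the crash is avoided
zpool import -o readonly=on rpool
# As a last resort, -F rewinds to an earlier txg (discards the most recent writes)
# zpool import -F rpool
```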
Is there a way I need to "initialize" the tag (4001) in OVS specifically? It's true that I don't use this tag on the host; it's only for the instances in this case. I'll bring up an interface on the host with that tag and give it a try.
EDIT: No, adding an interface (vlan4001) on the host and having...
The problem is that when an LXC container is started for the first time, when an LXC container is created, or after the PVE host is rebooted (which resets the OVS configuration, since PVE does not use a persistent OVS DB), the virtual interface plugged into the OVS port (vmbr0 here, with VLAN tag 4001) does not work.
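A quick way to check whether the tag actually made it into the (non-persistent) OVS database after a reboot (the veth name below is made up; it depends on the container ID and interface index):

```shell
# List the ports on the bridge, then read one port's VLAN tag
ovs-vsctl list-ports vmbr0
ovs-vsctl get port veth105i0 tag
# Re-apply the tag manually to test whether stale OVS state is the problem
ovs-vsctl set port veth105i0 tag=4001
```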