I'm getting the above warning in an unprivileged Arch Linux container that runs a recent systemd version.
Any idea?
Do I have a modified AppArmor profile somewhere on the system?
Here are the details:
Config of container
arch: amd64
cpulimit: 4
cpuunits: 1024
features: nesting=1...
Can anyone confirm that this is a GRUB + ZFS boot-only problem? If so, it should be fine to switch to newer ZFS features such as zstd compression on _other_ pools, and they should still import fine with pve-manager/6.4-13/9f411e79 (running kernel: 5.4.128-1-pve)?
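Before enabling newer feature flags like zstd on a pool, you can check their state from the shell first. A minimal sketch (the pool name `rpool` is just an example; this deliberately does not run `zpool upgrade`, since upgrading a pool GRUB boots from can make it unreadable to GRUB):

```shell
POOL=rpool   # example pool name, adjust for your system

# Show the state of the zstd feature flag on the pool:
# "disabled"/"enabled"/"active". Once it is "active", data on
# the pool depends on it and older readers cannot import it.
zpool get feature@zstd_compress "$POOL"

# Enabling zstd compression on a dataset (inherited by children):
# zfs set compression=zstd "$POOL/data"
```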
I can confirm this issue with the Proxmox 6.0-1 ISO written to a USB stick. The same test hardware boots Linux from USB sticks without issue, e.g. Manjaro.
There seems to be a serious regression here.
Sometimes solutions are very easy:
systemctl disable docker && reboot
I enabled Docker some time ago and just forgot about it. I also didn't know that it interferes that much with my network settings.
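If you want to keep Docker installed but stop it from rewriting the host firewall (Docker sets the iptables FORWARD policy to DROP, which can silently break bridged VM traffic), the daemon has a documented option to disable its iptables management. A sketch of `/etc/docker/daemon.json` (restart the daemon afterwards; whether this is appropriate depends on whether any containers still rely on Docker's own networking):

```
{
  "iptables": false
}
```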
To be specific:
I'm accessing Proxmox over a static-IP network (coming over WiFi -> Ethernet -> eth0, no bridge), so I'm only using eth0 and opening an SSH session. From that session on the Proxmox host I can access pfSense - I guess via the bridge? So yes, the bridge seems to work partially, but only from...
That's a very good point; I already checked that this morning. I unplugged the cable and dmesg showed the interface going down.
Also, the connection from my client through eth0 to the Proxmox host works, so they can't have changed. Or do you mean the bridge names, or something else entirely?
Actual problem:
No traffic on my bridge network to virtualized firewall.
Network setup related to firewall, see also interfaces config file:
virtio net0 <-> vmbr1 <-> eth1 <--> WAN
virtio net1 <-> vmbr0 <-> eth0 <--> LAN, Wifi, Clients and VMs all on the same network, no DMZ
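For context, a minimal /etc/network/interfaces sketch matching that layout (the addresses are placeholders, not taken from this thread):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports eth1
    bridge-stp off
    bridge-fd 0
```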
What happened...
You're right. It's enough to back up /rpool (on my system: rpool on /rpool type zfs (rw,noatime,xattr,noacl)). However, make sure you have a USB stick with ZFS support so you can restore a snapshot etc. in case you can't boot anymore. The Arch Linux one works very well for...
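A minimal sketch of that snapshot-and-restore approach (pool name and target path are examples; `zfs send -R` includes child datasets and their properties):

```shell
# Take a recursive snapshot of the pool, then stream it to a
# file on external media, to be restored from a ZFS-capable
# live USB if the system no longer boots.
SNAP="rpool@backup-$(date +%Y%m%d)"   # example snapshot name
zfs snapshot -r "$SNAP"
zfs send -R "$SNAP" | gzip > /mnt/usb/rpool-backup.zfs.gz

# Restore later from the live system:
# gunzip -c /mnt/usb/rpool-backup.zfs.gz | zfs receive -F rpool
```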
Was your original bridge vmbr1 affected by the two additional bridges (vmbr20, vmbr30), i.e. did the untagged traffic still go through? As soon as I add a bridge for a VLAN (i.e. using eth1.2 -> vlan2), the existing traffic on eth1 is interrupted.
Also, please share your final...
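For comparison, a per-VLAN bridge in /etc/network/interfaces typically looks like this (VLAN ID 20 follows the vmbr20 naming above; this is a sketch, not the poster's actual config):

```
auto eth1.20
iface eth1.20 inet manual

auto vmbr20
iface vmbr20 inet manual
    bridge-ports eth1.20
    bridge-stp off
    bridge-fd 0
```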
However I just got this error when halting the container:
[....] Unmounting local filesystems...umount: /mnt/testPool: block devices are not permitted on filesystem
failed.
mount: cannot mount rpool/subvol-202-disk-1 read-only
[info] Will now halt.
vm 202 - unable to parse value of 'mp0' -...
It works with the following configuration, but not with the commented-out entry. Which one is the recommended way of doing it?
#mp0: /mnt/testPool mp=/rpool/testPool/for_202
lxc.mount.entry: /rpool/testPool/for_202 mnt/testPool none bind,create=dir,optional 0 0
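For what it's worth, the commented-out `mp0` line likely fails to parse because, on PVE versions that support directory bind mounts via mount points, the format expects the host path first and the in-container path as the `mp=` option, separated by a comma. A sketch of the equivalent line in the container config (same paths as above):

```
mp0: /rpool/testPool/for_202,mp=/mnt/testPool
```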
Adding a bind mount (a ZFS pool directory) to my container results in an error when starting it:
lxc-start 1465211058.722 INFO lxc_start_ui - lxc_start.c:main:264 - using rcfile /var/lib/lxc/202/config
lxc-start 1465211058.722 WARN lxc_confile - confile.c:config_pivotdir:1817...