The machine is stopped to put it in a consistent state (OS shut down, disks cleanly unmounted, etc.), then started again, and the qemu process itself performs the backup while letting the guest run, backing up blocks the guest wants to write to early so it isn't stalled for too long.
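For reference, this is roughly how such a stop-mode backup would be triggered (the VM ID and storage name are just placeholders):

# vzdump 100 --mode stop --storage local

Everything described above happens automatically as part of that one task.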
Well that's not...
It would probably be better that way, but given that it's using 0666 permissions it shouldn't be much of an issue for 99% of the cases. I'll ask upstream.
It'll get more confusing in the future - cgroup v2 separates memory and swap accounting ;-) (then again, the adoption of that is going about as fast as the adoption of IPv6...)
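To illustrate (the lxc/101 part of the paths is just an example and differs per setup) - on a v1 host the swap limit is part of the combined memory+swap value:

cat /sys/fs/cgroup/memory/lxc/101/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/101/memory.memsw.limit_in_bytes

while on a v2 host memory and swap get separate knobs:

cat /sys/fs/cgroup/lxc/101/memory.max
cat /sys/fs/cgroup/lxc/101/memory.swap.max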
Right. More info would be useful. (And e.g. if you put an interface onto a bridge, you shouldn't also give it the IP address. That should only be on the bridge (including the DNS settings).)
Actually, if you don't need to do anything more complex I recommend not putting files in...
One thing I noticed:
bridge_ports eth0
while your interface's actual name is enp0s25
Other than that, you'll have to be more specific as to what's not working ;-)
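For reference, the relevant part of /etc/network/interfaces would then look roughly like this (addresses are placeholders; note the IP and dns settings go on the bridge, not on enp0s25):

iface enp0s25 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    bridge_ports enp0s25
    bridge_stp off
    bridge_fd 0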
There was a change to support mounting without gid=5 for unprivileged containers which don't have gid 5 mapped. Upstream has already fixed this in the git master branch, and we'll probably build a fixed package soon.
Next time this happens, can you please post the output of `systemctl status $vmid.scope` before doing a `systemctl kill`, to see which processes are running in the scope?
There's a long-term plan to try to test rules better when applying them, to find a way to somehow mark them in the UI or at least show specifically which rule doesn't work in the logs, but the low-level tools make this quite difficult/inconvenient, so for now this isn't happening - we're trying to...
A quick glance over the compile.txt shows a rather long multiport line (in the 'cpanel' group's input). pve-firewall >= 3.0-6 should actually complain about this when entering it. See if it works when you remove it; if that helps, split it into multiple rules.
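i.e. something along these lines inside the group definition (the ports are made up, the point is keeping each -dport list short enough for a single multiport match):

[group cpanel]
IN ACCEPT -p tcp -dport 21,22,25,53,80,110,143
IN ACCEPT -p tcp -dport 2077,2078,2082,2083,2086,2087,2095,2096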
Does /lib/systemd/systemd exist - and work - in the container? Going by the error output from lxc, either the file doesn't exist, or IIRC you also get that error when the linker can't find a library:
# chroot --userspec=nobody /var/lib/lxc/101/rootfs /lib/systemd/systemd --help...
You could check whether a vlan-aware linux bridge with `/sys/class/net/vmbr0/bridge/vlan_protocol` set to 0x88a8 would work. We currently only expose the vlan filtering flag in the network settings; adding the vlan protocol might be useful for some. Currently this has to be added manually with...
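Something along these lines should do for testing (untested, and it won't persist across reboots on its own):

# echo 0x88a8 > /sys/class/net/vmbr0/bridge/vlan_protocol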
Helpfully, I put the `/` in front of `sbin/init` in the `ls`, which is of course wrong and shows the host's symlink instead of the container's. (So after the `cd` it should really have been `ls -ld sbin sbin/init`, without the leading slashes.)
The interesting bit here is the error message `No such file or directory - Failed to exec "/sbin/init".`
Mount the container and check whether anything is missing or whether there are wrong symlinks or the like. In a Debian container that should be a symlink to /lib/systemd/systemd.
It should look roughly like...
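A rough sketch of the check (container ID 101 is just a placeholder):

# pct mount 101
# ls -ld /var/lib/lxc/101/rootfs/sbin /var/lib/lxc/101/rootfs/sbin/init
# pct unmount 101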
Unprivileged containers are not allowed to create device nodes. The postfix setup in your container uses a chrooted environment with some device nodes created in /var/spool/postfix/dev. In order to use it in an unprivileged container you first have to get rid of those files in the container you...
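Something like this inside the container should show what's there (the exact contents depend on the postfix setup):

# ls -l /var/spool/postfix/dev/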
Mh, you might be able to get around this with an smb/cifs mount, setting dir_mode and file_mode so that all files are readable by all users, and use that for templates, if that's less of a hassle for you than trying pve-container >= 2.0-21 from pvetest. For smb/cifs you'd have to mount it manually and...
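e.g. something along these lines (share, mountpoint and credentials file are placeholders):

# mount -t cifs //nas/templates /mnt/templates -o credentials=/root/.smbcred,dir_mode=0755,file_mode=0644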