Which Ubuntu version are you running in the container? There's not much to see in those files. LXC starts the container's init, which just fails without much info in between, and the journal doesn't seem to contain any messages relating to container 119. Maybe you'll get more information when...
Inside the container, yes, via quota-tools (quotacheck/edquota & friends). Supporting this will require some bigger changes to how we start up containers.
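For reference, a rough sketch of the usual quota-tools workflow inside the container, assuming the filesystem is mounted with the usrquota/grpquota options (username and paths are just examples):

quotacheck -cug /    # create the aquota.user / aquota.group files
quotaon /            # turn quotas on
edquota -u someuser  # edit block/inode limits for a user
repquota /           # report current usage and limits

The catch is the part mentioned above: getting the quota mount options and device access wired up for a container is what needs the bigger startup changes.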
This will eventually be supported. BTRFS has some quirks we need to deal with. E.g. merely enabling quota support on a btrfs file system causes a small but measurable performance impact, and using btrfs send on a subvolume does not include the actual quota limits (so we need to copy these manually on...
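For context, btrfs quotas work via qgroups rather than the classic quota-tools; roughly (paths are placeholders):

btrfs quota enable /mnt/data             # enable qgroup tracking (this is the part with the performance impact)
btrfs qgroup limit 10G /mnt/data/subvol  # limit a subvolume to 10 GiB
btrfs qgroup show /mnt/data              # inspect current usage and limits

Since the limits live in the qgroup metadata and not in the subvolume itself, `btrfs send` doesn't carry them along.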
If it's just for data and not critical for system startup (in other words, it's not your /etc or /usr mountpoint ;-)), you probably want to add nofail to the fstab entry (see man 5 systemd.mount). When using it as a storage for PVE you can use the is_mountpoint storage option to tell PVE to check that it's...
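Something along these lines (device, filesystem type, path and storage name are placeholders):

# /etc/fstab
UUID=xxxx-xxxx  /mnt/data  ext4  defaults,nofail  0  2

# tell PVE to only use the directory storage while it's actually mounted
pvesm set mydirstorage --is_mountpoint yes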
So about booting a degraded btrfs: yes, you'll need to use the `rootflags` grub option there, or wait for the initramfs to pop up and then mount it manually to `/root` via `mount -o degraded /dev/sdXY /root` and hit Ctrl+D.
You can of course add a custom grub entry to boot in degraded state...
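A rough sketch of the one-off variant: press `e` on the boot entry in the GRUB menu and append the flag to the linux line (kernel and device names are placeholders, the exact paths depend on your layout):

linux /boot/vmlinuz-... root=/dev/sdXY ro rootflags=degraded

For a permanent entry you'd copy your current menuentry from /boot/grub/grub.cfg into /etc/grub.d/40_custom with that flag added and run update-grub; treat this as a sketch only.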
Seems to be caused by differences in how the devices controller in cgroupv1 behaves vs what lxc emulates. We'll probably fix this by rolling out a default config for cgroupv2-devices to restore the previous behavior.
Both should work fine though. Have you by any chance been using systemd from backports? For the non-bpo version that boot option should have been the default anyway; with the bpo version the unified one is the default, but the old setting should still be fully functional.
Could you tell us what parameters you had set there? In theory some things, like for example moving only a subset of cgroups to v2, *could* work with lxc (but I wouldn't recommend it for production use).
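If you're not sure what the host is currently running, these (standard, not PVE-specific) checks are usually enough:

stat -fc %T /sys/fs/cgroup/  # 'cgroup2fs' = pure v2 (unified), 'tmpfs' = legacy/hybrid
cat /proc/cmdline            # look for systemd.unified_cgroup_hierarchy=... overrides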
I'd recommend against putting raid-capable file systems on hardware raid.
It'll still detect errors, but it will not be able to recover from them. You'd be gaining very little.
And given the issues people have been facing with ZFS in that regard I'm generally wary of such setups.
The only other thing we currently semi-expose is the `trunks` option you can configure only via the command line (see the qm(1) and pct(1) man pages on how to use their 'set' subcommand), this corresponds to using `bridge vlan add dev <iface> vid <ids>`. Note that any custom changes you do...
For vlan aware bridges it is possible to directly configure the vlans for each port connected to the bridge. (Which vlan ids should pass through, which should get tagged/untagged along the way).
Without this setting, each vlan tag gets its own vlan bridge. This only works if the selected bridge is...
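To tie this together, a rough illustration (the CT ID, interface names and VLAN IDs are made up; the host-side veth name follows the veth<ctid>i<n> scheme):

# limit a container's port to VLANs 10 and 20 via the semi-exposed trunks option
pct set 119 -net0 'name=eth0,bridge=vmbr0,trunks=10;20'

# roughly what this corresponds to on the vlan-aware bridge port
bridge vlan add dev veth119i0 vid 10
bridge vlan add dev veth119i0 vid 20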
Correct, thin pools don't have a file system directly on them.
However, I believe resizing this way may have only resized the data portion of the thin volume, not the metadata.
This may become a problem in the future, so you need to monitor the `Meta%` value in the `lvs` output, or extend the...
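Roughly (VG and pool names are placeholders; on a default PVE install the thin pool is usually pve/data):

lvs -o lv_name,data_percent,metadata_percent pve  # Data%/Meta% of the thin pool
lvextend --poolmetadatasize +1G pve/data          # grow only the pool's metadata LV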
Besides the devices cgroup, AppArmor and possible `nodev` mount flags, this also needs `CAP_SYS_RAWIO`, which is dropped by default for containers. You can add an empty `lxc.cap.drop` line to the config to clear the dropped capability list, then add a second such line with the default entries you find...
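Sketch of what that would look like in /etc/pve/lxc/<ctid>.conf; the list on the second line is only illustrative, copy the actual default drop list from your lxc config and leave out sys_rawio:

lxc.cap.drop:
lxc.cap.drop: mac_admin mac_override sys_time sys_module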
Also, are you sure you want to map the user `1000` to be the user `1010`? If so, I think the `subuid`/`subgid` ranges also need to be adapted.
EDIT: Just read the backlog. Yeah, you want to change the lines from `x 1000 1010 10` to `x 1000 1000 10`, start the range after it with `1010` and bump the...
Looks like you're missing a mapping for `1010`.
Either bump the `1000` entries to contain 11 users (1000 through 1010 inclusive), or start the next range at `1010`.
Yay for counting from zero ;-)
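For reference, a sketch of mapping container uids 1000..1009 straight to the same host uids while keeping everything else in the usual high range (numbers are illustrative, not a drop-in config):

# /etc/pve/lxc/<ctid>.conf -- 'u' lines shown, matching 'g' lines work the same
lxc.idmap: u 0 100000 1000
lxc.idmap: u 1000 1000 10
lxc.idmap: u 1010 101010 64526
# /etc/subuid (and /etc/subgid) need to allow root to map that host range:
# root:1000:10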
There's already a separate code path for the v2 freezer, but it's currently not being used, and apparently lxc doesn't provide any path at all when it's already on a pure cgroup v2 setup and queried explicitly for the "unified" cgroup. This will be fixed with the next pve-container update.
The step it fails at is the freeze step, which happens via cgroups, where we first connect to the container's monitor to query the exact cgroup paths.
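If you want to poke at this manually, the container's current cgroup can also be read from /proc on the host (assuming the CT is running; 119 is just an example ID):

pid=$(lxc-info -n 119 -p -H)  # init PID of the container
cat /proc/$pid/cgroup         # cgroup path(s) the freeze step operates on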
-) Have you done any cgroup-specific changes to your host (e.g. switched to cgroup v2)?
-) Can you post the output of the following commands...
An alternative to the systemd overrides would be to allow systemd to do its thing (but I only recommend this for unprivileged containers) via
# Append to /etc/pve/lxc/<arch ct ids>.conf
lxc.apparmor.raw: mount fstype=proc options=(nosuid,nodev,noexec,rw) -> /run/systemd/unit-root/proc/,
(Note...
The current packages don't handle the `--encryption-key` CLI parameter on pvesm correctly; the file has to be created manually via
`proxmox-backup-client key create --kdf=none /etc/pve/priv/storage/STORAGENAME.enc`