Podman inside unprivileged Alpine container fails to start

wrobelda

Member
Apr 13, 2022
Hi,

At some point podman stopped running inside my Alpine LXC containers. When starting an instance, I am getting an error:

Code:
podman run hello-world Hello
Resolved "hello-world" as an alias (/etc/containers/registries.conf.d/00-shortnames.conf)
Trying to pull quay.io/podman/hello:latest...
Getting image source signatures
Copying blob 81df7ff16254 done   |
Copying config 5dd467fce5 done   |
Writing manifest to image destination
WARN[0006] Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup path /libpod_parent/conmon: enabling controller cpuset: write /sys/fs/cgroup/libpod_parent/cgroup.subtree_control: no such file or directory
Error: crun: executable file `Hello` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found

Importantly, I have nesting enabled on the container and the unified cgroup v2 hierarchy on the Proxmox hypervisor:
Code:
root@proxmox:/etc# cat /etc/mtab  | grep cgroup
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime 0 0

as well as in the container itself:
Code:
cat /etc/mtab  | grep cgroup
none /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
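
For reference, a minimal check run inside the container compares what is available with what is actually delegated; these are standard cgroup v2 interface files, and a controller (such as cpuset) can only be enabled in a child cgroup like libpod_parent if it first appears in the parent's subtree_control:
Code:
# controllers this cgroup can use at all:
cat /sys/fs/cgroup/cgroup.controllers
# controllers delegated to child cgroups; cpuset has to show up here before
# podman can enable it under /sys/fs/cgroup/libpod_parent:
cat /sys/fs/cgroup/cgroup.subtree_control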

Podman inside the container is configured to run as root. The cgroups available to the container are:
Code:
root@proxmox:/etc#  ls -l /sys/fs/cgroup/lxc/115
total 0
-r--r--r-- 1 root root   0 Apr 22 18:33 cgroup.controllers
-r--r--r-- 1 root root   0 Apr 22 18:33 cgroup.events
-rw-r--r-- 1 root root   0 Apr 22 18:33 cgroup.freeze
--w------- 1 root root   0 Apr 22 18:33 cgroup.kill
-rw-r--r-- 1 root root   0 Apr 22 18:33 cgroup.max.depth
-rw-r--r-- 1 root root   0 Apr 22 18:33 cgroup.max.descendants
-rw-r--r-- 1 root root   0 Apr 22 18:33 cgroup.pressure
-rw-r--r-- 1 root root   0 Apr 22 18:33 cgroup.procs
-r--r--r-- 1 root root   0 Apr 22 18:33 cgroup.stat
-rw-r--r-- 1 root root   0 Apr 22 18:30 cgroup.subtree_control
-rw-r--r-- 1 root root   0 Apr 22 18:33 cgroup.threads
-rw-r--r-- 1 root root   0 Apr 22 18:33 cgroup.type
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpu.idle
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpu.max
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpu.max.burst
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpu.pressure
-rw-r--r-- 1 root root   0 Apr 22 18:30 cpuset.cpus
-r--r--r-- 1 root root   0 Apr 22 18:33 cpuset.cpus.effective
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpuset.cpus.exclusive
-r--r--r-- 1 root root   0 Apr 22 18:33 cpuset.cpus.exclusive.effective
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpuset.cpus.partition
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpuset.mems
-r--r--r-- 1 root root   0 Apr 22 18:33 cpuset.mems.effective
-r--r--r-- 1 root root   0 Apr 22 18:30 cpu.stat
-r--r--r-- 1 root root   0 Apr 22 18:33 cpu.stat.local
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpu.uclamp.max
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpu.uclamp.min
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpu.weight
-rw-r--r-- 1 root root   0 Apr 22 18:33 cpu.weight.nice
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.1GB.current
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.1GB.events
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.1GB.events.local
-rw-r--r-- 1 root root   0 Apr 22 18:33 hugetlb.1GB.max
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.1GB.numa_stat
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.1GB.rsvd.current
-rw-r--r-- 1 root root   0 Apr 22 18:33 hugetlb.1GB.rsvd.max
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.2MB.current
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.2MB.events
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.2MB.events.local
-rw-r--r-- 1 root root   0 Apr 22 18:33 hugetlb.2MB.max
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.2MB.numa_stat
-r--r--r-- 1 root root   0 Apr 22 18:33 hugetlb.2MB.rsvd.current
-rw-r--r-- 1 root root   0 Apr 22 18:33 hugetlb.2MB.rsvd.max
-rw-r--r-- 1 root root   0 Apr 22 18:33 io.max
-rw-r--r-- 1 root root   0 Apr 22 18:33 io.pressure
-rw-r--r-- 1 root root   0 Apr 22 18:33 io.prio.class
-r--r--r-- 1 root root   0 Apr 22 18:30 io.stat
-rw-r--r-- 1 root root   0 Apr 22 18:33 io.weight
-r--r--r-- 1 root root   0 Apr 22 18:30 memory.current
-r--r--r-- 1 root root   0 Apr 22 18:33 memory.events
-r--r--r-- 1 root root   0 Apr 22 18:33 memory.events.local
-rw-r--r-- 1 root root   0 Apr 22 18:30 memory.high
-rw-r--r-- 1 root root   0 Apr 22 18:33 memory.low
-rw-r--r-- 1 root root   0 Apr 22 18:30 memory.max
-rw-r--r-- 1 root root   0 Apr 22 18:33 memory.min
-r--r--r-- 1 root root   0 Apr 22 18:33 memory.numa_stat
-rw-r--r-- 1 root root   0 Apr 22 18:33 memory.oom.group
-r--r--r-- 1 root root   0 Apr 22 18:33 memory.peak
-rw-r--r-- 1 root root   0 Apr 22 18:33 memory.pressure
--w------- 1 root root   0 Apr 22 18:33 memory.reclaim
-r--r--r-- 1 root root   0 Apr 22 18:30 memory.stat
-r--r--r-- 1 root root   0 Apr 22 18:30 memory.swap.current
-r--r--r-- 1 root root   0 Apr 22 18:33 memory.swap.events
-rw-r--r-- 1 root root   0 Apr 22 18:33 memory.swap.high
-rw-r--r-- 1 root root   0 Apr 22 18:30 memory.swap.max
-r--r--r-- 1 root root   0 Apr 22 18:33 memory.swap.peak
-r--r--r-- 1 root root   0 Apr 22 18:33 memory.zswap.current
-rw-r--r-- 1 root root   0 Apr 22 18:33 memory.zswap.max
-rw-r--r-- 1 root root   0 Apr 22 18:33 memory.zswap.writeback
-r--r--r-- 1 root root   0 Apr 22 18:33 misc.current
-r--r--r-- 1 root root   0 Apr 22 18:33 misc.events
-rw-r--r-- 1 root root   0 Apr 22 18:33 misc.max
drwxrwxr-x 7 root 100000 0 Apr 22 18:31 ns
-r--r--r-- 1 root root   0 Apr 22 18:33 pids.current
-r--r--r-- 1 root root   0 Apr 22 18:33 pids.events
-rw-r--r-- 1 root root   0 Apr 22 18:33 pids.max
-r--r--r-- 1 root root   0 Apr 22 18:33 pids.peak
-r--r--r-- 1 root root   0 Apr 22 18:33 rdma.current
-rw-r--r-- 1 root root   0 Apr 22 18:33 rdma.max

And they are visible from within the container:
Code:
ls -al /sys/fs/cgroup
total 0
drwxrwxr-x    7 nobody   root             0 Apr 22 16:31 .
drwxr-xr-x   10 nobody   nobody           0 Apr 22 16:30 ..
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 cgroup.controllers
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 cgroup.events
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cgroup.freeze
--w-------    1 nobody   nobody           0 Apr 22 16:30 cgroup.kill
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cgroup.max.depth
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cgroup.max.descendants
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cgroup.pressure
-rw-rw-r--    1 nobody   root             0 Apr 22 16:30 cgroup.procs
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 cgroup.stat
-rw-rw-r--    1 nobody   root             0 Apr 22 16:31 cgroup.subtree_control
-rw-rw-r--    1 nobody   root             0 Apr 22 16:30 cgroup.threads
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cgroup.type
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.idle
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.max
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.max.burst
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.pressure
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.stat
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.stat.local
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.uclamp.max
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.uclamp.min
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.weight
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpu.weight.nice
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpuset.cpus
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 cpuset.cpus.effective
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpuset.cpus.exclusive
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 cpuset.cpus.exclusive.effective
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpuset.cpus.partition
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 cpuset.mems
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 cpuset.mems.effective
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.1GB.current
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.1GB.events
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.1GB.events.local
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.1GB.max
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.1GB.numa_stat
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.1GB.rsvd.current
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.1GB.rsvd.max
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.2MB.current
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.2MB.events
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.2MB.events.local
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.2MB.max
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.2MB.numa_stat
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.2MB.rsvd.current
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 hugetlb.2MB.rsvd.max
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 io.max
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 io.pressure
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 io.prio.class
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 io.stat
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 io.weight
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.current
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.events
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.events.local
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.high
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.low
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.max
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.min
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.numa_stat
-rw-rw-r--    1 nobody   root             0 Apr 22 16:30 memory.oom.group
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.peak
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.pressure
-rw-rw-r--    1 nobody   root             0 Apr 22 16:30 memory.reclaim
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.stat
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.swap.current
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.swap.events
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.swap.high
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.swap.max
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.swap.peak
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.zswap.current
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.zswap.max
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 memory.zswap.writeback
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 misc.current
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 misc.events
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 misc.max
drwxr-xr-x    2 root     root             0 Apr 22 16:30 openrc.crond
drwxr-xr-x    2 root     root             0 Apr 22 16:30 openrc.dropbear
drwxr-xr-x    2 root     root             0 Apr 22 16:30 openrc.networking
drwxr-xr-x    2 root     root             0 Apr 22 16:30 openrc.podman
drwxr-xr-x    2 root     root             0 Apr 22 16:30 openrc.syslog
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 pids.current
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 pids.events
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 pids.max
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 pids.peak
-r--r--r--    1 nobody   nobody           0 Apr 22 16:30 rdma.current
-rw-r--r--    1 nobody   nobody           0 Apr 22 16:30 rdma.max

(notice the nobody user, though).

Container config is:

Code:
arch: amd64
cores: 4
features: fuse=1,keyctl=1,nesting=1
hookscript: local:snippets/bridgefix.sh
hostname: ctr-rssbridge
memory: 2048
net0: name=eth0,bridge=vmbr0,hwaddr=DE:8C:9C:93:B5:20,ip=dhcp,type=veth
onboot: 1
ostype: alpine
rootfs: local-nvme-zfs:subvol-115-disk-0,size=8G
swap: 512
tty: 1
unprivileged: 1

What am I missing here? This used to work before switching to cgroups v2 and I can't find any hint. I tried a bunch of AI suggestions; nothing works.
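
The same delegation can also be checked from the host side under the container's cgroup path. A minimal sketch, assuming CT ID 115 as in the listing above (the ns directory appears to be what the container uses as its root):
Code:
cat /sys/fs/cgroup/lxc/115/cgroup.controllers
cat /sys/fs/cgroup/lxc/115/cgroup.subtree_control
# the container's apparent root cgroup, judging by the ns entry in the listing:
cat /sys/fs/cgroup/lxc/115/ns/cgroup.subtree_control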

Can someone please provide a comprehensive answer here and, perhaps, could Proxmox please maintain a wiki page for this? This issue comes up a lot, and it would be useful to know what the current approach is, considering the not-infrequent changes around containers/cgroups. I am sure the community would appreciate it tremendously!
 
At some point podman stopped running inside my Alpine LXC containers.
[...] could Proxmox please maintain a wiki page for this? This issue comes up a lot, and it would be useful to know what the current approach is, considering the not-infrequent changes around containers/cgroups. I am sure the community would appreciate it tremendously!

They already do, but people either do not RTFM and/or do not give a shit and simply ignore it...:
If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox QEMU VM.
https://pve.proxmox.com/wiki/Linux_Container
If you want to run application containers, for example, Docker images, it is best to run them inside a Proxmox QEMU VM.
https://pve.proxmox.com/wiki/FAQ -> Point 13

For reference:
https://bugzilla.proxmox.com/show_bug.cgi?id=4712
 
They already do, but people either do not RTFM and/or do not give a shit and simply ignore it...:
I do not appreciate those condescending, passive-aggressive, self-serving remarks, which actually bend what the documentation factually says: it says running nested containers is not recommended, it does not prohibit it as something outright wrong to do.

ESPECIALLY because nesting (and keyctl) in LXC is an option, and ESPECIALLY because nesting was recently enabled by default:
https://pve.proxmox.com/wiki/Linux_Container

So right back at you with RTFM. It's a gray area, as evidenced by Proxmox's own docs, which is specifically why I am asking for it to be documented well.

But seriously f*** o** with that holier-than-thou attitude.
 
you don't get to yell about the lack of documentation of how to do that thing.

Oh now I was „yelling” about it? Like I did here?

Can someone please provide a comprehensive answer here and, perhaps, could Proxmox please maintain a wiki page for this?

Again, I do not appreciate the condescending and manipulative comments twisting my words.
 
Oh now I was „yelling” about it? Like I did here?
Please calm down. Some things seem to have come across as more offensive to you than they read to me. Just to be clear: when we say “is not recommended”, problems like the ones you ran into are to be expected. We “don't recommend” things when we think they are a flat-out bad idea and bound to break eventually. As such, we don't provide documentation for such use cases, as that would give the appearance of us endorsing them.

However, that being said, it is not plain “forbidden”. Proxmox VE is a Linux system, and you can always do whatever you want. We don't prohibit things that are generally a bad idea, because that would be a never-ending task and might be frustrating for more experienced users.

So please, when you read “not recommended” in the docs, interpret it as “I am on my own if I do this.” Especially when a perfectly acceptable alternative is mentioned, such as in this case (i.e., using a VM to run Docker/Podman containers instead of an LXC).

Also note that nesting has different meanings. In an LXC context, nesting remains limited to running LXC containers for the most part (e.g., Docker/Podman is still *not* recommended). The option is mostly about exposing /sys and /proc from the host into the container.
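
For completeness, the feature flags are per-container settings that can be inspected and toggled from the host. A minimal sketch, assuming CT ID 115 as in the config posted above (changes take effect after a container restart):
Code:
# show the container's current feature flags:
pct config 115 | grep features
# set them explicitly (same values as in the posted config):
pct set 115 --features fuse=1,keyctl=1,nesting=1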
 
I don't use podman, but your error message is

Code:
Error: crun: executable file `Hello` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found

which happens because you are running "hello-world Hello" with podman.

Could you try with

Code:
# podman run hello-world

Note: if I try what you are trying, but with (working) Docker, I also get:

Code:
Error: crun: executable file `Hello` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found

so the error is definitely (I think :) not with podman, but with your usage of the container.
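
In other words, anything after the image name replaces the image's default command. A minimal illustration (the alpine image and the echo command are just placeholders):
Code:
# runs the image's default command:
podman run --rm hello-world
# trailing arguments override the default command inside the container;
# here `echo hi` runs instead of the image's own entry command:
podman run --rm docker.io/library/alpine echo hi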
 
I don't use podman, but your error message is

Code:
Error: crun: executable file `Hello` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found
The hello-world is just an example I used, and indeed there is an error in how I was invoking it. Fixing the command line does make podman return the expected result, albeit with the cgroupfs WARN. However, my actual issue here is precisely that cgroupfs WARN, which somehow turns into an ERROR for my other, "proper" containers, e.g.:

Code:
go-sync# podman-compose up -d
Error: creating cgroup path /libpod_parent/bbac7d6a4f54346ae5c6ac8adcc86b10e6334dc0d531330a26cc38656c3f3d61: enabling controller cpuset: write /sys/fs/cgroup/libpod_parent/cgroup.subtree_control: no such file or directory
Error: no pod with name or ID pod_go-sync found: no such pod
Error: no container with name or ID "go-sync_dynamo-local_1" found: no such container
Error: no pod with name or ID pod_go-sync found: no such pod
Error: no container with name or ID "go-sync_redis_1" found: no such container
Error: no pod with name or ID pod_go-sync found: no such pod
Error: no container with name or ID "go-sync_dev_1" found: no such container
Error: no pod with name or ID pod_go-sync found: no such pod
Error: no container with name or ID "go-sync_web_1" found: no such container

I think the difference between WARN and ERROR could be due to using podman vs podman-compose somehow. I need to look into this.

EDIT: Yup, this is due to podman-compose defaulting to running in pods and Podman > 5.x enforcing cgroup v2 availability in pods. The workaround is running with `--no-pod` or adding
Code:
x-podman:
    in_pod: false
to the compose YAML. This obviously still doesn't fix the underlying problem of cgroup v2 controllers not getting delegated properly.
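
For anyone else hitting this, a minimal sketch of where the x-podman section sits in a compose file (the service name and image are placeholders; only the top-level x-podman key matters here):
Code:
# compose.yaml
x-podman:
  in_pod: false   # don't wrap services in a pod, sidestepping the error above

services:
  web:
    image: docker.io/library/nginx:alpine   # placeholder service
    ports:
      - "8080:80"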
 
Please calm down. Some things seem to have come across as more offensive to you
If someone resorts to manipulation to make my question come off as demanding ("you don't get to yell") even though it wasn't at all, then I am sure as hell going to be offended, and I will be the judge of that, not you.

I also do not appreciate the extra patronizing here; it does NOT send the right message when you condone the behavior of other members while condescending to the person who actually got attacked for no reason. I was perplexed by the unprovoked hostility in this community and even more surprised that you see fit to defend it.

However, that being said, it is not plain “forbidden”.
Well, this was my take as well, and yet the other members here thought it was in good form to passive-aggressively RTFM me. This is a discussion forum, not an official support channel; it should be expected that questions about hackish, non-standard usage are welcome here. I mean, you don't support running macOS under Proxmox and yet somehow there are dozens of threads about it, while running Podman nested in LXC crosses the line for some? SMH, seriously.
 
I also do not appreciate the extra patronizing here; it does NOT send the right message when you condone the behavior of other members while condescending to the person who actually got attacked for no reason. I was perplexed by the unprovoked hostility in this community and even more surprised that you see fit to defend it.

I see just one person here who is hostile without being provoked, and it's neither Neobin nor BobhWasatch. I understand that you are not happy with their answers since you hoped for a different outcome, but nothing in their answers was in any way ad hominem.

Well, this was my take as well, and yet the other members here thought it was in good form to passive-aggressively RTFM me. This is a discussion forum, not an official support channel; it should be expected that questions about hackish, non-standard usage are welcome here. I mean, you don't support running macOS under Proxmox and yet somehow there are dozens of threads about it, while running Podman nested in LXC crosses the line for some? SMH, seriously.
As far as I know, no topic is outright forbidden here (except hints on how to disable the nag screen), but you also can't forbid other people from discouraging things that often enough turned out to be a bad idea. If you read the forum, you will find plenty of threads from people who broke their application containers inside LXC after an update.
In your original post you didn't mention that you were aware of the "hackish, non-standard usage" of your experiment, but instead wrote that you wished for some guidance in the docs. So I wouldn't consider it "passive-aggressive" or "holier than thou" to refer you to the point in the documentation where the subject is covered. If Neobin hadn't written his answer, I would have written the same. Now, if you consider this passive-aggressive, please say so, because then I know not to participate in any discussions with you in the future; no point in wasting both of our time.

Now, while you might be aware of your non-default usage, in the past there were more than enough threads with people who did something similar but were not aware of the issues related to running application containers inside LXC. Often enough they were quite happy after they were told that a VM with Docker or Podman might work better for their intended use case. Guess what: they tried exactly that (using a VM instead), got the result they wished for, and all was good. So I don't think that a hint in that regard is "hostile".
 
This is a discussion forum, not an official support channel; it should be expected that questions about hackish, non-standard usage are welcome here.
Initial post:
[...] could Proxmox please maintain a wiki page for this?

Those two statements contradict each other.
  1. Might be subjective/debatable, but I would call this official (read: not third-party) community forum a (non-obligatory, of course) official support channel, since the Proxmox developers themselves, not only the community members, are around and provide individual first-hand support on a very regular basis, make announcements, and take/react to feedback.
  2. But even with point 1 aside, a wiki page (on the official Proxmox wiki, of course) definitely is / would be something official (= non-hackish, standard usage).


I do not appreciate [...]
Again, I do not appreciate [...]
I also do not appreciate [...]

What I would appreciate: Less drama, more technical facts/discussions...
 
If someone resorts to manipulation to make my question come off as demanding ("you don't get to yell") even though it wasn't at all, then I am sure as hell going to be offended, and I will be the judge of that, not you.
You do get to decide what is and is not offensive to you, but I get to decide what is and isn't appropriate speech and tone for discussions here, and this:
But seriously f*** o** with that holier-than-thou attitude.
is decidedly not appropriate. Especially after the only thing that was pointed out to you was that the manual and Wiki explicitly discourage such setups.


Well, this was my take as well, and yet the other members here thought it was in good form to passive-aggressively RTFM me. This is a discussion forum, not an official support channel; it should be expected that questions about hackish, non-standard usage are welcome here. I mean, you don't support running macOS under Proxmox and yet somehow there are dozens of threads about it, while running Podman nested in LXC crosses the line for some? SMH, seriously.
As you might have noticed, this thread is also still active, and I haven't closed it down. So please get back on track, and if someone tells you that your setup is not supported and/or is discouraged by Proxmox, especially if they aren't aware that you already know this, don't tell them to “f*** o**”. Thanks!