First of all, I have to say that Proxmox is a REALLY great project, and I've loved using it ever since I first heard of it.
Sadly, I have only one issue with it at the moment: networking doesn't work well between Podman and Proxmox, specifically during the step where the podman...
I'm trying to pass through an existing LV to a container. I believe this means I need to use an image-based, storage-backed mount point as described here; that page mentions a special syntax for creating a new LV, but there is no mention of an option to mount an existing one.
I have created a new...
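For what it's worth, one common workaround (not the image-based syntax from the docs) is to mount the existing LV on the host and bind-mount it into the container. All names below are hypothetical: VG "pve", LV "media", container ID 101, and the target paths:

```shell
# Mount the existing LV somewhere on the host.
mkdir -p /mnt/media
mount /dev/pve/media /mnt/media

# Bind-mount that host path into CT 101 at /srv/media.
pct set 101 -mp0 /mnt/media,mp=/srv/media
```

Note that bind mounts are not included in vzdump backups and need the host-side mount to exist before the container starts (e.g. via /etc/fstab).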
A friend who is just starting out with Proxmox asked me a question today that I didn't know the answer to offhand, and I couldn't find a definitive answer here either.
How do you know the LXC templates are trustworthy?
LXC containers (as in, the templates pulled down via pveam) — I'm guessing they're pulled...
I have been using the openSUSE 15.3 container template for a while now and it has been working great, but after upgrading to 7.2.x, AppArmor does not work in the containers, old or new. It does seem to work in the Ubuntu 20.04 container I also have running.
I tried to create a new container...
I am having an unusual problem where, when I migrate a container from node 1 to node 2, it breaks. I migrate the container and it shows up in node 2's VM list just fine, but when I try to start it, it doesn't boot. It comes up with this error:
"TASK ERROR: unable to open file...
Hello - Complete noob here so apologies from the get go.
I am trying to use DeepStack AI (https://docs.deepstack.cc/using-deepstack-with-nvidia-gpus/index.html) and am following the directions there to get the Docker container to run. I haven't even gotten to running the container because I...
I'm given a .squashfs image which contains a root filesystem in squashfs format.
Should I be able to "import" such an image and run it as a container directly?
Or should I rather mount the image from within an LXC?
Let me know,
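As far as I know, Proxmox has no direct squashfs import, but if the image really is a plain root filesystem, one approach is to unpack it and repack it as a CT template tarball. A sketch — the image name, template name, CT ID 105, and storage names are all made up for illustration:

```shell
# Unpack the squashfs root filesystem into a working directory.
unsquashfs -d rootfs image.squashfs

# Repack it as a gzipped tarball in the template cache.
tar -C rootfs -czf custom-template.tar.gz .
mv custom-template.tar.gz /var/lib/vz/template/cache/

# Create an unprivileged container from the new "template".
pct create 105 local:vztmpl/custom-template.tar.gz \
    --hostname fromsquash --rootfs local-lvm:8 --unprivileged 1
```

Whether the result actually boots depends on the image shipping a usable init; a stripped-down appliance rootfs may need extra fixing up.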
I'm having some trouble mapping the UID/GIDs between two LXC containers: plex and deluge.
Here are the configs:
# UID mapping, plex uid is 998
lxc.idmap: u 0 100000 998
lxc.idmap: u 998 1234 1
lxc.idmap: u 999 100999 63536
# GID mapping, plex gid is 998
lxc.idmap: g 0 100000 998...
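One thing worth checking with maps like this: the ranges typically need to tile the container's uid space 0..65535 contiguously, i.e. each range must start where the previous one ended and the counts should sum to 65536. With the counts shown above, 998 + 1 + 63536 = 64535, which falls short; 64537 in the last range would close the gap. A small sketch (values taken from the post, with that last count adjusted) that checks this:

```shell
# Verify that "ct_start host_start count" idmap ranges are contiguous
# from 0 and cover exactly 65536 ids; prints "ok" on success.
check_idmap() {
    expected=0
    total=0
    while read -r ct_start host_start count; do
        [ "$ct_start" -eq "$expected" ] || { echo "gap at $expected"; return 1; }
        expected=$((ct_start + count))
        total=$((total + count))
    done
    [ "$total" -eq 65536 ] || { echo "total $total != 65536"; return 1; }
    echo "ok"
}

check_idmap <<'EOF'
0 100000 998
998 1234 1
999 100999 64537
EOF
# prints "ok"
```

The same arithmetic applies to the gid map, and the host-side ranges have to be delegated to root in /etc/subuid and /etc/subgid.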
Recently I cannot create any new containers or move volumes because of a ZFS list timeout. The specific error is:
TASK ERROR: command 'zfs list -o name,volsize,origin,type,refquota -t volume,filesystem -Hrp' failed: got timeout
Can anyone provide some ideas on how to fix this? Thank you.
I wanted to bounce an idea off you all and see what people think about its feasibility and possible issues.
TL;DR: I'm trying to get containers to move to other hosts while still accessing files through a mount point on a ZFS store on the host they came from. At the same time, this should require no...
I'm currently running 2 standalone Proxmox 6 nodes and I want them to be part of the same cluster.
The containers' IDs do not overlap between the two nodes.
Is there a way to create a cluster from those two nodes without having to dump, transfer, and import every container?
I recently added a second server to become my new main server. It has 2x 500 GB SSDs in a ZFS RAID1 created in the installer (each SSD plugs directly into a motherboard SATA port). How do I enable storing VMs and containers on these drives, given the two SSDs are the only disks Proxmox has access to?
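If I remember correctly, the installer's ZFS layout normally creates an rpool/data dataset and registers it as a "local-zfs" zfspool storage for guest disks, so it may already be there. A sketch for checking and, if needed, adding it by hand (the dataset and storage names are assumptions):

```shell
# See which storages are already configured and active.
pvesm status

# If no zfspool storage exists, create a dataset and register it
# for both VM disks (images) and container root filesystems (rootdir).
zfs create -p rpool/data
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir
```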
First of all, here is my Proxmox system report:
# pveversion --verbose
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
We have a Proxmox Server (v. 6.2-4), which hosts a few VMs and containers. Unfortunately, the server OS drive seems to be inaccessible after a power failure. We have replaced the OS drive and installed a new Proxmox environment (v 6.2-4) from scratch.
We've plugged in the old storage...
I've just installed Proxmox and tried several container images. None of them will start on either of my two clustered servers.
Here is the error I get.
root@kvm2:~# lxc-start -n 103 -F -l DEBUG -o /tmp/lxc.log
lxc-start: 103: conf.c: run_buffer: 323 Script exited with status 255...
How do you restart lxcfs — or rather, how do you resurrect /proc in running containers?
If lxcfs gets restarted for some reason or other, all the CTs choke:
# ps awuxf
Error: /proc must be mounted
To mount /proc at boot you need an /etc/fstab line like:
proc /proc proc defaults
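From what I understand, once lxcfs restarts, the FUSE overlays already bind-mounted inside running containers point at dead file handles, and I'm not aware of a supported way to re-attach them in place — the usual recovery is restarting the affected containers. A sketch, assuming the pct CLI and its default `pct list` column layout:

```shell
# Restart lxcfs itself, then reboot every running container so the
# /proc and /sys overlays get freshly mounted from the new lxcfs.
systemctl restart lxcfs
for ct in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
    pct reboot "$ct"
done
```

This obviously causes downtime per container, so it's worth doing in a maintenance window rather than as a reflex.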
I have done this twice, with a WD Green and a WD Purple (both died; the Green was not suited for 24/7 operation anyway, and the Purple was second-hand and bought cheap).
Now I've got an SSD. The installer initializes /dev/pve/data to make use of a big chunk of the SSD's capacity, so what I usually do is make use of it via...
I have an application that requires uid/gids starting from 70000 (http://vmm.localdomain.org/), but that seems not to be possible with unprivileged CTs.
Is there any way of doing this?
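In case it helps: as far as I know, an unprivileged CT can map container ids above 65535 as long as the host delegates a matching range to root and the container config maps it explicitly. A sketch — the host range 170000 and the count of 1000 are illustrative assumptions, not anything the application requires:

```
# /etc/subuid and /etc/subgid on the host (delegate both ranges to root):
#   root:100000:65536
#   root:170000:1000

# /etc/pve/lxc/<CTID>.conf: keep the default map, then additionally map
# CT uids/gids 70000-70999 onto host ids 170000-170999:
#   lxc.idmap: u 0 100000 65536
#   lxc.idmap: u 70000 170000 1000
#   lxc.idmap: g 0 100000 65536
#   lxc.idmap: g 70000 170000 1000
```

The container-side ranges must not overlap, and every host range used in lxc.idmap has to appear in /etc/subuid and /etc/subgid, or the container will refuse to start.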