Hi Proxmox Community.
We have a problem with our LXC backups on a single Proxmox node.
We have configured two LXC containers; one always backs up fine without problems, but the second has some issues.
Both back up to a NAS, each via a different CIFS share.
Here is the...
I have a 5-node Proxmox Virtual Environment currently running version 5.4-13.
I also run a 2-node PMG service based on LXC containers.
Am I able to update the existing LXC containers (as supplied by the Proxmox team) to the latest version of Buster and Proxmox 6.1 while still...
Hi, I wanted to disable AppArmor on one of my containers; however, the wiki says to use:
lxc.aa_profile = unconfined
But the container wouldn't start with this option; I had to use the following to get it to work:
lxc.apparmor.profile = unconfined
Here's the link to the wiki article (under...
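For reference, the same renamed key applies when editing the container's PVE config file directly. A minimal sketch, assuming the container ID 118 as a placeholder and noting that PVE config files use `key: value` syntax rather than `key = value`:

```shell
# On the Proxmox host, raw LXC options go into /etc/pve/lxc/<ID>.conf
# (CT ID 118 is a placeholder).
# The old LXC 2.x key was:
#   lxc.aa_profile: unconfined
# Since LXC 3.0 it is:
#   lxc.apparmor.profile: unconfined
echo 'lxc.apparmor.profile: unconfined' >> /etc/pve/lxc/118.conf

# Restart the container so the profile change takes effect
pct stop 118 && pct start 118
```

The rename happened upstream in LXC 3.0, which is why older wiki snippets with `lxc.aa_profile` no longer work.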
I have been trying to figure this one out by myself, but I think I need some help with interpretation.
One of my nested Proxmox servers, which runs 4 LXC containers (each using ~1G RAM), claims it is using 14G out of 16G.
What is using the rest of the memory?
root@vh0:~# cat /proc/meminfo...
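If the host uses ZFS, one common culprit is the ARC cache, which shows up as used rather than reclaimable memory. A quick check, assuming ZFS on Linux (the arcstats file only exists on hosts with ZFS loaded):

```shell
# Current ARC size in bytes (proc file provided by ZFS on Linux)
awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats

# Compare against overall usage: a large "used" column with a small
# "buff/cache" column often points at the ARC or kernel slab memory
free -h
```

The ARC shrinks under memory pressure, but until then it is counted against the host's used memory.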
I have created a container based on a Debian Buster template to use as a VPN server.
The problem is that only port 22 is reachable on that machine, and I'm puzzled about how to solve this (I'm fairly new to Proxmox/containers, and Linux is not my strongest point).
I have an LXC container that I want to migrate to node3. If HA is enabled for that container, it always migrates to node1, regardless of the selected destination. The container is on shared Ceph storage.
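HA-managed resources are placed according to their HA group's node priorities, which can override the destination picked in the GUI. A hedged sketch of pinning the resource to node3 (the group name prefer-node3 and CT ID 100 are placeholders):

```shell
# Create an HA group that prefers node3 over node1 (higher priority wins)
ha-manager groupadd prefer-node3 --nodes "node3:2,node1:1"

# Assign the container's HA resource to that group (CT 100 is a placeholder)
ha-manager set ct:100 --group prefer-node3

# Migrations of HA resources should go through the HA stack
ha-manager migrate ct:100 node3
```

Without a group entry for node3, the HA manager is free to relocate the container to whichever grouped node has the highest priority.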
I have a Proxmox 6 server with LXC containers. If I add a network adapter to a container, everything works fine with the MAC address on the default setting (auto). But no MAC address is generated. In syslog I have this error message:
systemd-udevd: link_config: autonegotiation is...
We configured cpulimit=0.5 for all containers, and after that the server load average increased to around 300 (nothing else changed). But the server itself is fast and CPU usage is under 20%.
I think that because of the cpulimit, processes in the containers have to wait for CPU, and the Linux kernel...
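That reasoning fits how Linux computes load average: it counts runnable (and uninterruptible) tasks, so processes throttled by the CFS bandwidth limit behind cpulimit queue up and inflate the load even while actual CPU usage stays low. One hedged way to verify is to relax the limit on a single container and watch the load (CT ID 100 is a placeholder):

```shell
# Show the current limit for one container (CT 100 is a placeholder)
pct config 100 | grep cpulimit

# Temporarily remove the limit (0 = unlimited), then watch the load average
pct set 100 --cpulimit 0
uptime
```

If the load drops sharply while CPU usage barely changes, the high load was throttling artifacts rather than real contention.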
I'm backing up an LXC with rootfs and 2 extra ZFS mount points.
vpool/subvol-120-disk-0 909M 7.11G 909M /vpool/subvol-120-disk-0
vpool/subvol-120-disk-1-mysql-data 14.2G 25.8G 14.2G /vpool/subvol-120-disk-1-mysql-data
Hi to all,
I have been testing Proxmox for many months without trouble.
For some new needs, I have done some tests with the latest LXC images available from the Proxmox download repository,
currently CentOS 8 and Fedora 30, and neither works well; there are some buggy effects.
I don't know if it's my proxmox or the...
First-time poster here, nice to meet you all.
We are trying to run Snort as an NIDS in a container on our Proxmox host. We have dedicated NICs on our server for each container (theoretically) and a Cisco 3750-series switch that is connected to a different switch (which we can't manage)...
Oct 11 11:10:29 pve-lap systemd: Started PVE LXC Container: 118.
Oct 11 11:10:29 pve-lap pvedaemon: <root@pam> end task UPID:pve-lap:00000877:10AC5DF9:5DA04703:vzstart:118:root@pam: OK
Oct 11 11:10:30 pve-lap audit: AVC apparmor="DENIED" operation="mount" info="failed flags...
I created a container, but I still get an incorrect login each time when using the username root
with different passwords; the result is always the same.
lxc-monitord 20191002161236.771 INFO monitor - monitor.c:lxc_monitor_sock_name:212 - Using monitor socket name...
I am mounting several NFS shares. For my LXC and QEMU images I wish to mount the NFS share as synchronous.
QEMU guests are working well on the NFS sync share.
For LXC, however, I noticed my sync writes dropped down below 10 MB/s and would hang for several minutes after writing test files.
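For reference, the mount behavior can be controlled per storage via its mount options. A sketch of an `/etc/pve/storage.cfg` entry, where the storage name, server address, and export path are all placeholders:

```shell
# Example NFS storage entry in /etc/pve/storage.cfg
# (nas-sync, 192.0.2.10, and /export/images are placeholders):
#
#   nfs: nas-sync
#       server 192.0.2.10
#       export /export/images
#       path /mnt/pve/nas-sync
#       content images,rootdir
#       options sync,vers=3
```

Note that LXC writes go through the host's page cache and bind mounts rather than through a QEMU block layer, so the two guest types can behave very differently on the same `sync` export.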
I'm on Proxmox 6.
I have containers based on Debian 10; yesterday I stopped one and it never restarted:
Job for firstname.lastname@example.org failed because the control process exited with error code.
See "systemctl status email@example.com" and "journalctl -xe" for details.
In Proxmox PVE 6.X I've noticed some odd behavior within the GUI when attempting to migrate an NFS mounted LXC container.
I wonder if someone can reproduce this, so I can determine whether it's an issue on my end or a bug I should report.
Note: Migration of the Container is working...
I'd like to set up unprivileged containers with a GlusterFS mount inside. The idea is to have one Ansible controller in each datacenter so that, if we lose connectivity to one datacenter, we can still run playbooks from the other.
So, my lxc.idmap does the job for the bind mount...
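For anyone comparing notes, a typical idmap for a bind mount keeps one host ID unmapped so the container user can own the mounted path. A minimal sketch, assuming UID/GID 1005 as the shared ID and CT 100 as a placeholder:

```shell
# /etc/pve/lxc/100.conf (CT 100 and ID 1005 are placeholders):
#   lxc.idmap: u 0 100000 1005
#   lxc.idmap: g 0 100000 1005
#   lxc.idmap: u 1005 1005 1
#   lxc.idmap: g 1005 1005 1
#   lxc.idmap: u 1006 101006 64530
#   lxc.idmap: g 1006 101006 64530

# The host must also be allowed to delegate that ID to root:
echo 'root:1005:1' >> /etc/subuid
echo 'root:1005:1' >> /etc/subgid
```

The three ranges together still cover IDs 0-65535, with only 1005 mapped straight through to the host.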
I need to install targetcli-fb in an LXC container for an iSCSI service.
After installing targetcli-fb on the host, I got all kernel modules available in the LXC, and targetcli works without errors.
My first step was to create a file-based backstore, but this ended with an error. Unfortunately targetcli writes...
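Since LXC containers share the host kernel, the LIO target modules have to be loaded on the host before anything inside the container can use them; a hedged check:

```shell
# On the Proxmox host: load the LIO core and the fileio backstore module
modprobe target_core_mod
modprobe target_core_file   # needed for file-based (fileio) backstores

# Verify the modules are present before retrying targetcli in the container
lsmod | grep target_core
```

If the file-based backstore still fails inside the container, the next thing to check is whether the container is allowed to access configfs, which LIO uses for its configuration.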