So, some more poking around in the system logs suggests that this happens every time Ubuntu runs the PHP sessionclean script to clean up PHP sessions. These two containers must be the only ones running PHP.
Does anyone know of a way to fix this?
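For anyone else chasing this: on stock Ubuntu the cleanup is driven by a cron job that ships with the php-common package (newer releases also have a systemd timer for it), so you can confirm the correlation from inside each affected container with something like:

  cat /etc/cron.d/php
  systemctl list-timers | grep -i sessionclean

Those paths are Ubuntu's usual packaging defaults; adjust if your containers are laid out differently.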
I'm not very good with how Apparmor works, so I was hoping someone might help me solve this one.
Two of my many LXC containers, IDs 110 and 170, are absolutely spamming dmesg as follows:
Please see this pastebin. It was too much to post in a message here.
I figured it out.
I forgot I needed to run "update-initramfs -u" after making changes to udev config files to get everything working right.
Rebooted, and now everything uses the new (hopefully static) device names...
I'll leave this thread up here in case anyone else runs into the same...
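In case it helps the next person, the sequence that finally worked for me was roughly:

  # edit the persistent naming rules (the same 70-persistent-net.rules file mentioned below)
  nano /etc/udev/rules.d/70-persistent-net.rules
  # rebuild the initramfs so the copy used at early boot matches
  update-initramfs -u
  reboot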
So, to get the machine working temporarily until I have all these devices figured out, I edited /etc/network/interfaces and used the new device names, followed by a reboot.
After reboot, I now have yet another device using the old naming convention: eth1 instead of its recent name, enp13s0f0...
On a whim I decided to make a backup copy of my existing 70-persistent-net.rules file, delete the one in /etc/udev/rules.d, and reboot to see what happened.
My theory was that without this file assigning Ethernet device names, all of the devices would instead use the...
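For reference, the temporary stanza I used looked something like this; the address here is a placeholder, only the device name is real:

  auto enp13s0f0
  iface enp13s0f0 inet static
      address 192.168.1.10
      netmask 255.255.255.0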
I have been running a somewhat complex network setup on my server for some time:
2x Copper Gigabit Ethernet on Server Board
4x Copper Gigabit Ethernet (Intel 4x PRO/1000 NIC)
1x 10GBaseT Intel 82598EB
I'm not going to go into the details of what they are used for, as it is not...
I have an existing, very complicated container with lots of interfaces and mounts. It's running Ubuntu 14.04 LTS, which is about to go EOL.
I have tried ZFS snapshotting the existing container's rpool/subvol-110-disk-1 dataset and doing an in-place upgrade, but it...
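For completeness, the snapshot side of that attempt was just the standard pair against the dataset named above (the snapshot name is arbitrary):

  zfs snapshot rpool/subvol-110-disk-1@pre-upgrade
  # roll back when the in-place upgrade misbehaves
  zfs rollback rpool/subvol-110-disk-1@pre-upgrade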
Good to know, thank you.
Maybe I am confused. Is it only the desktop version of 18.04 that defaults to netplan?
I am curious. How does it determine the OS version? Does it parse the container's /etc/lsb-release?
So, I know you configure the network interfaces for new containers in the web interface (or by editing the corresponding config file in /etc/pve/lxc), but how does it work when you actually power up the container?
The reason I ask is, I have a bunch of Ubuntu 14.04 based containers...
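For context, here is roughly what the config side looks like on one of mine (values illustrative): each file in /etc/pve/lxc records an ostype alongside the netX lines, and I assume that ostype is what selects the distro-specific setup logic at start:

  pct config 110
  # ostype: ubuntu
  # net0: name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=dhcp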
Thanks for the help.
I rebooted the server today, and it appears to be running normally again.
Hopefully a 4.18+ PVE Kernel that fixes this issue will be made available quickly.
I mean, I could easily compile one, download a mainline binary kernel, or add the sources for the kernel from...
I will have to check this a little later.
Does a reboot temporarily solve the issue? I could probably do that overnight, and then go another few months without running into it again.
My use case doesn't require restarting containers regularly. They start once when the server goes up...
I am on the following kernel:
Linux proxmox 4.15.18-5-pve #1 SMP PVE 4.15.18-24 (Thu, 13 Sep 2018 09:15:10 +0200) x86_64 GNU/Linux
I just shut down a container today using "pct stop 200".
I went to start it back up again with "pct start 200" and this process just sits there doing nothing...
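While it was stuck, these were the most useful things I found to poke at; the log path is arbitrary:

  pct status 200
  ps aux | grep lxc-start
  # bypass pct and run the container in the foreground with debug logging
  lxc-start -n 200 -F -l DEBUG -o /tmp/lxc-200.log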
Is this advisable?
The reason I ask is, I'm not sure I fully understand how the PVE frontend configures the containers network and other settings.
14.04 and 16.04 use ifup/down and are thus configured in /etc/network/interfaces, but 18.04 replaces ifup/down with netplan, which is...
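To make the difference concrete, here is the same DHCP interface in both styles (file names are the common defaults, not requirements):

  # /etc/network/interfaces on 14.04/16.04
  auto eth0
  iface eth0 inet dhcp

  # /etc/netplan/01-netcfg.yaml on 18.04
  network:
    version: 2
    ethernets:
      eth0:
        dhcp4: true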
Thank you for your suggestion.
Part of my problem is that this container hosts a back-end service that relies on an external device on a dedicated Ethernet interface (10.0.3.x) and storage on yet another dedicated interface (10.0.2.x), and then has frontends on my local network accessing it...
I have a quick question.
I have a container with a rather complex configuration, with many mounted folders and several Ethernet devices. I'd like to keep the container as is, but replace the template it is running off of with a different distribution.
Is this possible?
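My current thinking, untested and with the template name and storage as placeholders, is to create a fresh container from the new template and carry the netX/mpX lines over by hand:

  cp /etc/pve/lxc/110.conf /root/110.conf.bak
  pct create 111 local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz --rootfs local-zfs:8
  # then copy the netX: and mpX: lines from the backup into /etc/pve/lxc/111.conf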
I don't have the liberty to reboot at the time to troubleshoot exporting the path.
I'll try using the full path of the smartctl binary and see if that works next time I reboot.
Do you know if the smartctl command requires smartd to be running?
I have a ZFS pool in my proxmox box consisting of Seagate Enterprise drives.
They accept the smartctl -l scterc command to set the error recovery timeout, but unfortunately, for some reason, the setting does not persist across reboots.
While reboots are infrequent, I don't want to forget to...
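The workaround I'm leaning towards is a cron @reboot entry; the drive letters are placeholders for my twelve data disks, and I'm using the full smartctl path since cron's PATH is minimal:

  # /etc/cron.d/scterc
  @reboot root for d in /dev/sd[a-l]; do /usr/sbin/smartctl -l scterc,70,70 $d; done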
This seems to have done the trick, thank you.
Regarding the delayed notifications, I did some research and it turns out both my ISP and my VPN provider block port 25 to reduce the risk of email spamming, so I am not sure how they are getting through at all...
I'll take a look, thanks, but I don't think this is it. It keeps resending warnings about the same disks (that are no longer in the system), with newer date stamps.
I suspect what is happening is that smartd hasn't noticed that they have been replaced due to not handling hot swaps well, so it...
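One thing worth trying, assuming smartd is running off the usual DEVICESCAN line in /etc/smartd.conf: it only enumerates devices at startup, so a restart should make it pick up the swapped drives:

  systemctl restart smartd   # the unit may be called smartmontools on older releases
  journalctl -u smartd | tail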
First let me explain my setup. My Proxmox box boots off of a ZFS mirror of two 500GB SSDs. I also have a secondary ZFS pool consisting of 12 spinning disks, 2 SSD SLOG/ZIL devices and 2 SSD L2ARC devices, which I use for data storage. I am in the middle of a slow project to one...